In this paper, hypernetworks are trained to generate behaviors across a range of unseen task conditions, via a novel TD-based training objective and data from a set of near-optimal RL solutions for training tasks. This work relates to meta RL, contextual RL, and transfer learning, with a particular focus on zero-shot performance at test time, enabled by knowledge of the task parameters (also known as context). Our technical approach views each RL algorithm as a mapping from the MDP specifics to the near-optimal value function and policy, and seeks to approximate this mapping with a hypernetwork that generates near-optimal value functions and policies given the parameters of the MDP. We show that, under certain conditions, learning this mapping can be cast as a supervised learning problem. We empirically evaluate the effectiveness of our method for zero-shot transfer to new reward and transition dynamics on a series of continuous control tasks from the DeepMind Control Suite. Our method demonstrates significant improvements over baselines from multitask and meta RL approaches.
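As a rough illustration of the idea, the sketch below shows a hypernetwork that maps a task-context vector to the weights of a small policy network. The architecture, layer sizes, and names are assumptions for illustration only, not the paper's exact model.

```python
import torch
import torch.nn as nn

class PolicyHypernetwork(nn.Module):
    """Generates the weights of a one-hidden-layer policy from task parameters (context)."""
    def __init__(self, context_dim, state_dim, action_dim, hidden=64):
        super().__init__()
        self.state_dim, self.action_dim, self.hidden = state_dim, action_dim, hidden
        # number of parameters of the target policy: state -> hidden -> action
        n_params = (state_dim * hidden + hidden) + (hidden * action_dim + action_dim)
        self.generator = nn.Sequential(
            nn.Linear(context_dim, 256), nn.ReLU(),
            nn.Linear(256, n_params),
        )

    def forward(self, context, state):
        w = self.generator(context)
        i = 0
        W1 = w[i:i + self.state_dim * self.hidden].view(self.hidden, self.state_dim)
        i += self.state_dim * self.hidden
        b1 = w[i:i + self.hidden]; i += self.hidden
        W2 = w[i:i + self.hidden * self.action_dim].view(self.action_dim, self.hidden)
        i += self.hidden * self.action_dim
        b2 = w[i:i + self.action_dim]
        h = torch.tanh(state @ W1.T + b1)
        return torch.tanh(h @ W2.T + b2)   # action in [-1, 1]

# usage: produce an action for an unseen task, given its context vector
hyper = PolicyHypernetwork(context_dim=4, state_dim=8, action_dim=2)
action = hyper(torch.randn(4), torch.randn(8))
```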
Abstraction has been widely studied as a way to improve the efficiency and generalization of reinforcement learning algorithms. In this paper, we study abstraction in the continuous-control setting. We extend the definition of MDP homomorphisms to encompass continuous actions in continuous state spaces. We derive a policy gradient theorem on the abstract MDP, which allows us to leverage approximate symmetries of the environment for policy optimization. Based on this theorem, we propose an algorithm that makes use of the lax bisimulation metric. We demonstrate the effectiveness of our method on benchmark tasks from the DeepMind Control Suite. Our method's ability to exploit MDP homomorphisms for representation learning leads to improved performance when learning from pixel observations.
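For reference, the standard (discrete-space) MDP homomorphism conditions that this work generalizes to continuous state and action spaces require, for a state map $f$ and state-dependent action map $g_s$, that rewards and transition images agree under the map:

```latex
\bar{R}\bigl(f(s),\, g_s(a)\bigr) = R(s,a),
\qquad
\bar{P}\bigl(f(s') \mid f(s),\, g_s(a)\bigr)
  \;=\; \sum_{s'' \in f^{-1}(f(s'))} P\bigl(s'' \mid s, a\bigr).
```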
Continuous-time reinforcement learning offers an appealing formalism for describing control problems in which the passage of time is not naturally divided into discrete increments. Here, we consider the problem of predicting the distribution of returns obtained by an agent interacting in a continuous-time stochastic environment. Accurate return prediction has proven useful for determining optimal policies for risk-sensitive control, learning state representations, multi-agent coordination, and more. We begin by establishing a distributional analogue of the Hamilton-Jacobi-Bellman (HJB) equation for diffusions and, more broadly, Feller-Dynkin processes. We then specialize this equation to the setting in which the return distribution is approximated by $n$ uniformly weighted particles, a common design choice in distributional algorithms. Our derivation highlights additional terms due to statistical diffusivity, which arise from properly handling distributions in the continuous-time setting. Based on this, we propose a tractable algorithm for approximately solving the distributional HJB based on a JKO scheme, which can be implemented in an online control algorithm. We demonstrate the effectiveness of such an algorithm on a synthetic control problem.
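For context, the classical (expected-value) HJB equation for a discounted controlled diffusion $dX_t = \mu(X_t, a)\,dt + \sigma(X_t, a)\,dW_t$, which this work lifts to the distributional setting, reads (notation assumed):

```latex
\rho\, V(x) \;=\; \max_{a}\Bigl[\, r(x,a)
  \;+\; \mu(x,a)^{\top} \nabla V(x)
  \;+\; \tfrac{1}{2}\,\mathrm{tr}\!\bigl(\sigma(x,a)\,\sigma(x,a)^{\top}\, \nabla^2 V(x)\bigr) \Bigr],
```

with $\rho$ the discount rate; per the abstract, the distributional analogue replaces $V$ with a return distribution and picks up additional terms due to statistical diffusivity.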
In this work, we study the use of the Bellman equation as a surrogate objective for value prediction accuracy. While the Bellman equation is uniquely solved by the true value function over all state-action pairs, we find that the Bellman error (the difference between the two sides of the equation) is a poor proxy for the accuracy of the value function. In particular, we show that (1) due to cancellations from both sides of the Bellman equation, the magnitude of the Bellman error is only loosely related to the distance to the true value function, even when considering all state-action pairs, and (2) in the finite-data regime, the Bellman equation can be satisfied exactly by infinitely many suboptimal solutions. This means that the Bellman error can be minimized without improving the accuracy of the value function. We demonstrate these phenomena through a series of propositions, illustrative toy examples, and empirical analysis in standard benchmark domains.
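A tiny, self-contained toy (constructed here for illustration, not taken from the paper) shows the flavor of claim (2): with finite data, the sampled Bellman equation can be satisfied exactly by solutions with arbitrarily large value error.

```python
import numpy as np

gamma = 0.9

# Two-state deterministic chain: s0 --a--> s1, reward 1, then reward 0 forever.
# True values: Q*(s0, a) = 1, Q*(s1, a) = 0.
q_true = {("s0", "a"): 1.0, ("s1", "a"): 0.0}

# The dataset contains only the single transition from s0; s1 never appears as a start state.
dataset = [("s0", "a", 1.0, "s1")]

def bellman_error(q):
    return sum((q[(s, a)] - (r + gamma * q[(s2, "a")])) ** 2 for s, a, r, s2 in dataset)

# Any constant c for Q(s1, a), paired with Q(s0, a) = 1 + gamma * c, gives zero
# Bellman error on the dataset, yet the value error grows with |c|.
for c in [0.0, 10.0, 100.0]:
    q = {("s0", "a"): 1.0 + gamma * c, ("s1", "a"): c}
    value_error = max(abs(q[k] - q_true[k]) for k in q_true)
    print(f"c={c:6.1f}  Bellman error={bellman_error(q):.2f}  value error={value_error:.1f}")
```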
We present a reward-predictive, model-based deep learning method with trajectory-constrained visual attention for mapless, local visual navigation tasks. Our method learns to place visual attention at locations in latent image space that track the trajectory induced by vehicle control actions, improving prediction accuracy during planning. The attention model is jointly optimized with a task-specific loss and an additional trajectory-constraint loss, allowing adaptability while encouraging a regularized structure that improves generalization and reliability. Importantly, visual attention is applied in the latent feature-map space rather than the raw image space to facilitate efficient planning. We validate our model on visual navigation tasks that involve planning low-turbulence trajectories in off-road settings and climbing uphill terrain. Experiments involve randomized, procedurally generated simulations and real-world environments. We find that our method improves generalization and learning efficiency compared with attention and self-attention alternatives.
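A hedged sketch of how the joint objective described above might look; the specific loss forms, the mask-based trajectory constraint, and all names here are assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def joint_loss(pred_reward, true_reward, attention_map, trajectory_mask, lam=0.1):
    """Task-specific loss plus a trajectory-constraint term on the attention map.

    attention_map and trajectory_mask live in latent feature-map space; the
    constraint penalizes attention mass placed off the action-induced trajectory.
    """
    task_loss = F.mse_loss(pred_reward, true_reward)
    constraint_loss = (attention_map * (1.0 - trajectory_mask)).mean()
    return task_loss + lam * constraint_loss
```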
Many practical applications of reinforcement learning constrain agents to learn from a fixed batch of data which has already been gathered, without offering further possibility for data collection. In this paper, we demonstrate that due to errors introduced by extrapolation, standard off-policy deep reinforcement learning algorithms, such as DQN and DDPG, are incapable of learning without data correlated to the distribution under the current policy, making them ineffective for this fixed batch setting. We introduce a novel class of off-policy algorithms, batch-constrained reinforcement learning, which restricts the action space in order to force the agent towards behaving close to on-policy with respect to a subset of the given data. We present the first continuous control deep reinforcement learning algorithm which can learn effectively from arbitrary, fixed batch data, and empirically demonstrate the quality of its behavior in several tasks.
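A simplified sketch of the batch-constrained action-selection idea; the generative model, perturbation network, and hyperparameters below are placeholders standing in for the paper's components, not their actual interfaces.

```python
import torch

def select_action(state, generative_model, perturbation_net, q_net, n=10, phi=0.05):
    """Pick the highest-valued action among candidates that stay close to the batch.

    generative_model.sample(states) is assumed to return actions resembling those
    in the fixed batch; perturbation_net outputs a small adjustment in [-1, 1].
    """
    states = state.unsqueeze(0).repeat(n, 1)
    candidates = generative_model.sample(states)
    candidates = (candidates + phi * perturbation_net(states, candidates)).clamp(-1.0, 1.0)
    q_values = q_net(states, candidates).squeeze(-1)
    return candidates[q_values.argmax()]
```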
In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested.
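The key target computation the abstract describes, taking the minimum over a pair of (target) critics, can be written as (notation assumed):

```latex
y \;=\; r \;+\; \gamma \,\min_{i=1,2}\, Q_{\theta_i'}\bigl(s',\, \pi_{\phi'}(s')\bigr),
```

where $\theta_i'$ and $\phi'$ denote slowly updated target networks; the actor $\pi_\phi$ is then updated less frequently than the critics to reduce per-update error.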
In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.
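As one concrete example of the kind of significance reporting the abstract advocates (the seed returns below are made-up numbers), final returns from two algorithms can be compared with Welch's t-test and a bootstrap confidence interval:

```python
import numpy as np
from scipy import stats

# Hypothetical final returns from 5 independent seeds for two algorithms.
returns_a = np.array([3120., 2890., 3410., 2750., 3300.])
returns_b = np.array([2980., 3050., 2700., 2860., 2940.])

# Welch's t-test (does not assume equal variances across methods).
t, p = stats.ttest_ind(returns_a, returns_b, equal_var=False)

# Bootstrap confidence interval for the difference in mean return.
rng = np.random.default_rng(0)
diffs = [rng.choice(returns_a, returns_a.size).mean()
         - rng.choice(returns_b, returns_b.size).mean() for _ in range(10_000)]
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"p = {p:.3f}, 95% CI for mean difference: [{lo:.0f}, {hi:.0f}]")
```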
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if it (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefits of INVALIDATOR are three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated but instead relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated on real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
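A schematic rendering of the two-stage decision rule described above; the set-based invariant representation and all helper names are hypothetical, not the tool's actual interfaces.

```python
def is_overfitting(patched_invariants, correct_spec_invariants, buggy_error_invariants,
                   syntax_model=None, patch_features=None):
    """Stage 1: semantic check via likely invariants; Stage 2: syntax-based classifier.

    Invariants are modeled as sets of predicate strings; syntax_model is any
    classifier with a predict() method trained on labeled patches (both assumed).
    """
    violates_correct_spec = not correct_spec_invariants.issubset(patched_invariants)
    keeps_error_behavior = bool(buggy_error_invariants & patched_invariants)
    if violates_correct_spec or keeps_error_behavior:
        return True  # semantically judged as overfitting
    if syntax_model is not None and patch_features is not None:
        return bool(syntax_model.predict([patch_features])[0])
    return False
```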
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.